DATA PROCESSING METHOD
Patent abstract:
The present application discloses a data processing method and apparatus. In the method, after determining that an obtained preprocessing block is validated, a trusted protocol node can start validating the next preprocessing block that is ready for validation and, in parallel, perform data processing on the service data in the validated preprocessing block. That is, the trusted protocol node implements parallel processing of service data across a service consensus stage and a service submission stage: it can perform data processing on one part of the service data in the service submission stage while performing consensus processing on another part of the service data in the service consensus stage, thereby improving the service data processing efficiency of the system.
Publication number: BR112019008775B1
Application number: R112019008775-3
Filing date: 2018-05-31
Publication date: 2021-08-17
Inventor: Shifeng Wang
Applicant: Advanced New Technologies Co., Ltd
IPC main classification:
Patent description:
FIELD OF THE INVENTION [001] The present invention relates to the field of computer technologies and, in particular, to a data processing method and apparatus. BACKGROUND OF THE INVENTION [002] With the continuous development of computer technologies, the scope of application of trusted protocol (blockchain) technologies has expanded. Today, many service models have become more efficient and secure due to the introduction of trusted protocol technologies, so as to serve users more effectively. [003] In practical applications, a service execution process based on trusted protocol technologies can generally be divided into three stages: [004] 1. Service handling stage: In this stage, a trusted protocol node can receive process-ready service data (which may also be referred to as transaction data) sent by a user using a terminal or a client, and store the service data after verifying it. Of course, at this stage, the trusted protocol node can also receive process-ready service data transmitted by another trusted protocol node and store that service data in the manner described above. [005] 2. Service consensus stage: In this stage, if the trusted protocol node acts as a master node that initiates consensus, the trusted protocol node can obtain a part of the stored service data, pack that part of the service data into a preprocessing block, and transmit the preprocessing block to the other trusted protocol nodes so that they validate the preprocessing block. After receiving the preprocessing block, each other trusted protocol node in the consensus network can perform a consensus check on the service data in the preprocessing block based on its own stored service data. Of course, if the trusted protocol node is not a master node, it can receive a preprocessing block transmitted by the master node and perform a consensus check on a service request in the preprocessing block using a service request stored in the trusted protocol node's memory. [006] 3. Service submission stage: In this stage, after determining that the preprocessing block processed in the service consensus stage is validated, the trusted protocol node can store the service data of the preprocessing block in a trusted protocol. In addition, the trusted protocol node can store the service data in a specified database and release the service data of the preprocessing block from the trusted protocol node's storage space. [007] In the existing technology, for the same piece of service data, the trusted protocol node generally needs to complete the service consensus stage before entering the service submission stage, and the trusted protocol node can validate the next preprocessing block that is ready for validation only after completing the service submission stage. [008] In other words, in the existing technology, the service consensus stage and the service submission stage of one round of service data processing are performed serially. The trusted protocol node can start the service consensus stage of the next round of service data processing only after completing the service submission stage of the current round. As a result, the time interval between rounds of service data processing is inevitably increased, and the service processing efficiency of the entire system is reduced.
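To make the serial bottleneck of [007] and [008] concrete, the following Python fragment sketches the flow of the existing technology. It is only an illustration under simplified assumptions; the class name SerialNode and its methods are hypothetical and are not part of the described method.

```python
import time

class SerialNode:
    """Hypothetical node that runs the stages strictly one after another."""

    def run_consensus(self, block):
        time.sleep(0.01)          # stands in for the service consensus stage
        return True               # assume the block is validated

    def submit(self, block):
        time.sleep(0.01)          # stands in for the service submission stage

    def process(self, blocks):
        for block in blocks:
            if self.run_consensus(block):
                # The next consensus round cannot start until submit() returns,
                # which is the bottleneck described in [008].
                self.submit(block)

SerialNode().process(["block-A", "block-B", "block-C"])
```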
DESCRIPTION OF THE INVENTION [009] An embodiment of the present application provides a data processing method, so as to solve the existing problem of low service processing efficiency in trusted protocol technologies. [010] An embodiment of the present application provides a data processing method, including: obtaining, by a trusted protocol node, a preprocessing block that is ready for validation, and validating the preprocessing block; and if it is determined that the preprocessing block is validated, starting to validate a next preprocessing block that is ready for validation and, in parallel, performing data processing on service data in the validated preprocessing block. [011] An embodiment of the present application provides a data processing apparatus, so as to solve the existing problem of relatively low service consensus efficiency. [012] An embodiment of the present application provides a data processing apparatus, including: an acquisition module, configured to obtain a preprocessing block that is ready for validation and validate the preprocessing block; and a processing module, configured to: if it is determined that the preprocessing block is validated, start validating a next preprocessing block that is ready for validation and, in parallel, perform data processing on service data in the validated preprocessing block. [013] An embodiment of the present application provides a data processing apparatus, so as to solve the existing problem of relatively low service consensus efficiency. [014] An embodiment of the present application provides a data processing apparatus, including a memory and at least one processor, where the memory stores a program, and the processor or processors are configured to perform the following steps: obtaining a preprocessing block that is ready for validation and validating the preprocessing block; and if it is determined that the preprocessing block is validated, starting to validate a next preprocessing block that is ready for validation and, in parallel, performing data processing on service data in the validated preprocessing block. [015] One or more of the technical solutions used in the embodiments of the present application can achieve the following beneficial effects: [016] In the embodiments of the present application, after determining that the obtained preprocessing block is validated, the trusted protocol node starts, through parallel processing, validating the next preprocessing block that is ready for validation and performs data processing on the service data in the validated preprocessing block. That is, the trusted protocol node implements parallel processing of service data across a service consensus stage and a service submission stage. The trusted protocol node can not only perform data processing on one part of the service data in the service submission stage, but also perform consensus processing on another part of the service data in the service consensus stage. Therefore, the time interval between rounds of consensus processing in the service consensus stage can be reduced, so as to effectively improve the service data processing efficiency of the system. BRIEF DESCRIPTION OF THE DRAWINGS [017] The figures described here are used to provide a further understanding of the present application and form a part of the present application. The schematic embodiments of the present application and their descriptions are used to explain the present application and do not constitute an improper limitation on the present application.
In the figures: [018] Figure 1 is a schematic diagram illustrating a data processing process, according to an embodiment of the present application; [019] Figure 2 is a schematic diagram illustrating data processing performed by a trusted protocol node, according to an embodiment of the present application; [020] Figure 3 is a schematic diagram illustrating a data processing apparatus, according to an embodiment of the present application; and [021] Figure 4 is a flowchart illustrating an example of a computer-implemented method for improving the processing efficiency of trusted protocol technologies using parallel service data processing, in accordance with an implementation of the present disclosure. DESCRIPTION OF EMBODIMENTS OF THE INVENTION [022] To make a person skilled in the art better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application are clearly and completely described below with reference to the figures attached to the embodiments of the present application. Apparently, the described embodiments are merely some, and not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art based on the embodiments of the present application without creative effort shall fall within the scope of protection of the present application. [023] Figure 1 is a schematic diagram illustrating a data processing process, according to an embodiment of the present application. The data processing process includes the following steps. [024] (S101). A trusted protocol node obtains a preprocessing block that is ready for validation and validates the preprocessing block. [025] In this embodiment of the present application, the trusted protocol node can obtain, in a service consensus stage, a preprocessing block of the current consensus round (here, the currently obtained preprocessing block is referred to as the preprocessing block of the current consensus). The preprocessing block can be generated by the trusted protocol node based on service data stored by the trusted protocol node, or it can be obtained from another trusted protocol node. [026] (S102). If it is determined that the preprocessing block is validated, start validating a next preprocessing block that is ready for validation and, in parallel, perform data processing on service data in the validated preprocessing block. [027] After determining that the consensus check on the preprocessing block currently ready for validation is successful, the trusted protocol node can, through parallel processing, perform service-submission-stage data processing on the validated preprocessing block. Therefore, while consensus processing of the service consensus stage is performed on the next preprocessing block, data processing of the service submission stage can be performed synchronously and effectively on the validated preprocessing block. [028] It can be seen that, in this embodiment of the present application, during the service data processing process, the trusted protocol node synchronously performs the consensus processing of the service consensus stage and the data processing of the service submission stage. That is, suppose there are at least two preprocessing blocks ready for validation.
Therefore, based on the technical solutions provided in this application, while data processing is performed, in the service submission stage, on a validated preprocessing block, consensus processing can be performed synchronously, in the service consensus stage, on another preprocessing block that has not yet undergone consensus processing. [029] It should be noted that, after it is determined that the consensus processing performed on the preprocessing block currently ready for validation is successful, consensus processing begins to be performed on the next preprocessing block ready for validation, and the processing parameters of the currently validated preprocessing block are obtained. Therefore, a processor (which may later be referred to as a predetermined processor) configured to implement the data processing of the service submission stage in the service data processing process performs, based on the processing parameters, data processing on the currently validated preprocessing block. [030] For example, when starting to perform consensus processing on the next preprocessing block ready for validation, the predetermined processor performs, based on a generated processing parameter, data processing on the currently validated preprocessing block. It can be understood here that consensus processing and data processing are performed in parallel on the next preprocessing block ready for validation and on the currently validated preprocessing block, respectively, so that the time interval between the consensus processing performed on the preprocessing block currently ready for validation and the consensus processing performed on the next preprocessing block ready for validation is effectively shortened. [031] For another example, when consensus processing starts to be performed on the next preprocessing block ready for validation, the currently validated preprocessing block is first stored in a predetermined queue to wait, and the predetermined processor then performs data processing on it. The predetermined processor successively performs, based on a rule (for example, based on the sequence of consensus completion times), data processing on the validated preprocessing blocks stored in the queue. It can be understood here that the consensus processing of the preprocessing block currently ready for validation and the data processing of that preprocessing block are completed asynchronously. [032] For example, suppose there are three preprocessing blocks, A, B, and C, and the three preprocessing blocks are successively sent to the trusted protocol node for consensus in alphabetical order. After determining that preprocessing block A is validated, the trusted protocol node can perform data processing on preprocessing block A using the predetermined processor. In addition, the trusted protocol node can start validating preprocessing block B. After determining that preprocessing block B is validated and finding that the data processing performed on preprocessing block A is not yet complete, the trusted protocol node can store the validated preprocessing block B in a predetermined queue to wait, and continue by validating preprocessing block C. When determining that the data processing performed on preprocessing block A is complete, the trusted protocol node can take preprocessing block B out of the predetermined queue, to perform data processing on preprocessing block B using the predetermined processor.
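The behaviour of the predetermined queue in the example of blocks A, B, and C can be pictured with a short sketch. This is a hedged illustration only: the queue, the worker thread, and the sleep calls are hypothetical stand-ins for the consensus and submission work, not the actual implementation.

```python
import queue
import threading
import time

validated = queue.Queue()          # the "predetermined queue" holding validated blocks

def predetermined_processor():
    """Plays the role of the predetermined processor: submission-stage work only."""
    while True:
        block = validated.get()    # blocks B and C wait here while A is still being processed
        if block is None:
            break
        time.sleep(0.02)           # stand-in for storing/releasing/chaining the block
        print(f"submitted {block}")
        validated.task_done()

worker = threading.Thread(target=predetermined_processor)
worker.start()

for block in ["A", "B", "C"]:      # consensus thread: validate blocks one after another
    time.sleep(0.01)               # stand-in for consensus checking of the block
    validated.put(block)           # hand the validated block over and move on immediately

validated.join()
validated.put(None)
worker.join()
```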
[033] Therefore, for each individual preprocessing block, the consensus processing and the data processing of that preprocessing block are completed asynchronously. For different preprocessing blocks, the consensus processing of one preprocessing block and the data processing of another, already validated, preprocessing block can be performed synchronously. [034] After the trusted protocol node determines that the consensus check on the preprocessing block currently ready for validation is successful, this embodiment of the present application includes, but is not limited to, performing the following two types of operations: [035] 1. Operation of the first type: Determine a processing parameter corresponding to the currently validated preprocessing block. The processing parameter includes a parameter used to process the service data in the preprocessing block, so that the predetermined processor can process the preprocessing block based on the processing parameter, thereby completing the related operations of the service submission stage. The operation of the first type is described in detail below. [036] The processing parameter can include, but is not limited to, a storage parameter, a release parameter, a delete parameter, and a chaining parameter. The above is merely a simple example used to describe some parameters included in the processing parameter. In practical applications, the processing parameter can further include other parameters, and this can be determined based on the specific operations performed by the trusted protocol node in the service submission stage. [037] For example, the release parameter is used to instruct to release a validated preprocessing block from the storage space. [038] The storage parameter is used to instruct to store the service data of a validated preprocessing block at a specified location. Different storage parameters are determined for different preprocessing blocks. The storage parameter includes a storage location. [039] The delete parameter is used to instruct to delete a message (for example, a pre-prepare message, a prepare message, or a commit message in a PBFT consensus) generated for a validated preprocessing block in the service consensus stage, so as to reduce storage pressure. [040] The chaining parameter is used to instruct to chain the preprocessing block, in a block format and based on the header hash of the previous block recorded in the preprocessing block, onto the trusted protocol in which the previous block is located. [041] Preferably, in this embodiment of the present application, while validating the next adjacent preprocessing block ready for validation, the trusted protocol node can further process, in parallel, the service data in the currently validated preprocessing block using the predetermined processor. [042] In the existing technology, it can be understood that the trusted protocol node completes the service consensus stage and the service submission stage of a service data processing process using the same thread. The trusted protocol node first needs to complete the service consensus stage of this service data processing process using the thread, and only then execute the service submission stage of this service data processing process using the same thread. Apparently, in the existing technology, the trusted protocol node performs the service consensus stage and the service submission stage of the service data processing process serially. As a result, the time interval between adjacent rounds of service data processing is increased, and the efficiency of service data processing is reduced.
[043] To effectively solve the described problem, in this embodiment of the present application, the trusted protocol node preconfigures a processor (the processor can operate through asynchronous processing, and no specific limitation is imposed here). The processor can be configured to perform the operations involved in the service submission stage. That is, in this embodiment of the present application, the trusted protocol node can implement the consensus processing of the service consensus stage and the data processing of the service submission stage of the service data processing process using two separate threads. One thread is used to perform consensus processing on a preprocessing block ready for validation, and the other thread is used to perform data processing on a validated preprocessing block. Therefore, for the same preprocessing block, the consensus processing and the data processing are completed asynchronously. [044] As such, when the trusted protocol node performs, using the processor, the operations involved in the service submission stage, the trusted protocol node can, unaffected, begin to validate the next adjacent preprocessing block ready for validation, that is, begin to perform the next consensus, thereby considerably shortening the time interval between adjacent consensus rounds and improving consensus efficiency. [045] In this embodiment of the present application, the processing parameters determined in the current consensus can be obtained based on the processing parameters determined in the previous consensus. [046] The storage parameter is used as an example for description. After determining that the preprocessing block currently ready for validation is validated, the trusted protocol node can determine, based on the storage parameter of the currently validated preprocessing block, the storage parameter of the next adjacent preprocessing block ready for validation, and store the storage parameter of that next block. [047] For example, suppose the storage parameter corresponding to the currently validated preprocessing block (which can also be understood as the storage parameter corresponding to the current consensus) specifies that the service data in the validated preprocessing block needs to be stored in a table a in relational database A. The trusted protocol node can then determine, based on the alphabetical sequence of the tables in relational database A, that the storage parameter corresponding to the next adjacent consensus is: store, in a table b in relational database A, the service data in the preprocessing block whose consensus in that round (that is, the next adjacent consensus mentioned above) is successful. [048] The trusted protocol node can store the determined storage parameter of the next consensus. Therefore, when it is determined that the next preprocessing block ready for validation is validated, the predetermined processor can determine, based on the obtained storage parameter, the table and the database in which the service data in the validated preprocessing block needs to be stored.
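One way to read [046] to [048] is that the storage parameter of the next consensus is derived from the current one by advancing through an ordered sequence of tables. A minimal sketch, assuming a hypothetical list of table names:

```python
# Hypothetical ordered tables in relational database A; the real naming scheme
# is not specified in the application.
TABLES = ["a", "b", "c", "d"]

def next_storage_parameter(current_table):
    """Derive the storage parameter of the next consensus from the current one."""
    index = TABLES.index(current_table)
    return TABLES[(index + 1) % len(TABLES)]

# After the current consensus stored its block in table "a", the node records
# that the next validated block is to be stored in table "b".
assert next_storage_parameter("a") == "b"
```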
[049] Preferably, in this embodiment of the present application, after determining that the preprocessing block currently ready for validation is validated, the trusted protocol node can also determine, based on the preprocessing block and the storage parameter of the preprocessing block, the storage parameter of the next adjacent preprocessing block ready for validation, and store the storage parameter of that next block. [050] Specifically, the storage location in the storage parameter can exist in the form of a base pointer, where the location pointed to by the base pointer is the storage location of the service data of a preprocessing block. After determining that the preprocessing block currently ready for validation is validated, the trusted protocol node can use the current location of the base pointer as a starting point, move the base pointer based on the size of the preprocessing block currently ready for validation, and determine that the new location of the base pointer is the storage parameter corresponding to the next consensus, that is, the storage parameter of the next adjacent preprocessing block ready for validation. [051] For example, suppose the processing parameter of each consensus includes a storage parameter in the form of a base pointer. The base pointer (that is, the storage parameter) points to a specific storage location for the service data. The initial value of the base pointer can be set to 0. After each consensus, the trusted protocol node can determine, based on the size of the validated preprocessing block and the base pointer in the processing parameter corresponding to the current consensus, the specific value of the base pointer in the processing parameter corresponding to the next adjacent consensus. In the first consensus, the trusted protocol node determines that the validated preprocessing block is 1024 bytes, so that the trusted protocol node can determine, based on the determined preprocessing block size and the initial value 0 of the base pointer, that the base pointer in the processing parameter corresponding to the second consensus is 1024 bytes, and store the base pointer. Correspondingly, in the second consensus, the trusted protocol node can store, using the processor, the service data of the validated preprocessing block at the storage location corresponding to the base pointer of 1024 bytes. [052] In the second consensus, the trusted protocol node determines that the validated preprocessing block is 10 bytes, so that the trusted protocol node can determine, based on the determined size of the validated preprocessing block and the 1024-byte base pointer in the processing parameter corresponding to the previous consensus (that is, the first consensus), that the base pointer in the processing parameter corresponding to the third consensus is 1034 bytes, and store the base pointer. Correspondingly, in the third consensus, the trusted protocol node can store, using the processor, the service data of the validated preprocessing block at the storage location corresponding to the base pointer of 1034 bytes, and subsequent consensus rounds can be deduced by analogy. [053] It should be noted that, in this embodiment of the present application, the storage parameter in the processing parameter corresponding to the next consensus can be determined based on other information about the preprocessing block in addition to the size of the currently validated preprocessing block. The information to be used to determine the storage parameter can be determined by an operation and maintenance engineer of the trusted protocol node. Details are not repeatedly described here.
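The base-pointer arithmetic in [051] and [052] amounts to accumulating byte offsets. A minimal sketch, assuming the storage parameter is a plain byte offset:

```python
def advance_base_pointer(current_pointer, validated_block_size):
    """Storage parameter for the next consensus: current offset plus the size of
    the block that was just validated (see [051] and [052])."""
    return current_pointer + validated_block_size

pointer = 0                                    # initial value of the base pointer
pointer = advance_base_pointer(pointer, 1024)  # first consensus: block of 1024 bytes
assert pointer == 1024                         # where the second consensus stores its data
pointer = advance_base_pointer(pointer, 10)    # second consensus: block of 10 bytes
assert pointer == 1034                         # where the third consensus stores its data
```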
[054] It should be noted that, for the different parameters included in the processing parameter, the operations performed by the trusted protocol node in the service submission stage determine whether these parameters need to change after each consensus. For example, for the described storage parameter, because the service data in the preprocessing blocks involved in different consensus rounds cannot be stored at the same storage location, the storage parameter needs to be changed accordingly after each consensus. For the described chaining parameter, regardless of which preprocessing block a consensus relates to, every preprocessing block needs to be stored in the trusted protocol in a block format once the consensus in the consensus network is successful. That is, regardless of the preprocessing block, the processor needs to perform a chaining operation on the preprocessing block as soon as the preprocessing block is validated. Therefore, the chaining parameter does not need to change after each consensus; for each consensus, the chaining parameter can remain the same. [055] Preferably, in this embodiment of the present application, the trusted protocol node can store the processing parameter obtained for the next consensus in the predetermined queue. For example, the storage parameter of the preprocessing block submitted to the next adjacent service consensus is stored in the predetermined queue. [056] Thus, the processor can obtain the processing parameter from the predetermined queue (that is, obtain the processing parameter of the validated preprocessing block), to store, based on the storage parameter in the processing parameter, the service data in the validated preprocessing block corresponding to that storage parameter. [057] The predetermined queue mentioned here can be a first-in, first-out (FIFO) queue, or it can be a queue of another type; no specific limitation is imposed here. The processor can obtain a processing parameter stored in the FIFO queue and determine, from the stored validated preprocessing blocks and based on the storage parameter in the processing parameter, the preprocessing block corresponding to the storage parameter, so as to store the service data of that preprocessing block based on the storage parameter. [058] Specifically, the processor can obtain the storage parameter from the described FIFO queue, and the processor can then determine the ready-to-process preprocessing block corresponding to the storage parameter. For example, when the preprocessing block currently ready for validation is validated, the trusted protocol node generates a storage parameter for the preprocessing block and records a correspondence between the preprocessing block and the storage parameter. Therefore, the processor can determine, based on the correspondence, the ready-to-process preprocessing block corresponding to the storage parameter. For another example, when the preprocessing block currently ready for validation is validated, the trusted protocol node generates a storage parameter for the preprocessing block, determines a first moment at which the storage parameter is generated, determines a second moment at which the preprocessing block currently ready for validation is validated, and establishes a correspondence between the first and second moments. Therefore, the processor can search, based on the generation time of the storage parameter, for a preprocessing block whose consensus completion time satisfies a determined condition with respect to that generation time.
It can then be determined that the preprocessing block found is the ready-to-process preprocessing block corresponding to the storage parameter. For another example, the processor obtains, from the FIFO queue, the storage parameter at the front of the queue. The processor then looks, in the trusted protocol node's storage space, for the validated preprocessing block that has been stored for the longest time, and determines that this preprocessing block is the preprocessing block corresponding to the storage parameter. [059] After determining the ready-to-process preprocessing block corresponding to the storage parameter, the processor can store, based on the storage parameter, the service data in the ready-to-process preprocessing block at the storage location specified by the storage parameter. [060] It should be noted that, in this embodiment of the present application, in addition to using the FIFO queue, the trusted protocol node can also store each processing parameter using another type of queue, for example, a double-ended queue. Details are not repeatedly described here. [061] The processor can store, based on the storage parameter in the obtained processing parameter, the service data of the ready-to-process preprocessing block corresponding to the storage parameter. In addition, the processor can perform other operations on the ready-to-process preprocessing block based on other parameters in the processing parameter. [062] For example, the processor can release the service data in the ready-to-process preprocessing block from the storage space based on the release parameter in the processing parameter. For another example, the processor can delete, based on the delete parameter in the obtained processing parameter, the pre-prepare message, the prepare message, the commit message, etc. generated in the service consensus stage of the current consensus, so as to save storage space of the trusted protocol node. Of course, the processor can also perform other operations based on other parameters in the processing parameter, and the details are not repeatedly described here. [063] 2. Operation of the second type: Update, based on the consensus parameter corresponding to the current consensus, the consensus parameter corresponding to the next consensus. That is, the trusted protocol node can determine the consensus parameter corresponding to the currently validated preprocessing block and obtain, based on the determined consensus parameter, the consensus parameter corresponding to the next adjacent preprocessing block ready for validation. [064] It should be noted that, in this embodiment of the present application, after determining that the consensus check on the preprocessing block that needs to be validated in the current consensus is successful, the trusted protocol node needs to obtain and store, based on the processing parameter corresponding to the current consensus, the processing parameter corresponding to the next consensus. In addition, the trusted protocol node can also update, based on the consensus parameter corresponding to the current consensus, the consensus parameter corresponding to the next consensus. That is, the trusted protocol node can determine the consensus parameter corresponding to the currently validated preprocessing block and obtain, based on the determined consensus parameter, the consensus parameter corresponding to the next adjacent preprocessing block ready for validation. [065] The consensus parameter mentioned here can be understood as attribute information corresponding to a single consensus.
For example, the PBFT consensus is used for description. In a PBFT consensus process, a single consensus usually corresponds to a view number v, and the view number v is used to uniquely identify that consensus. In a single consensus, regardless of which trusted protocol node in the consensus network is used as the master node, the previous-block header hash in the preprocessing block generated by the trusted protocol node is generally the header hash of the latest current block in the trusted protocol. The view number v and the previous-block header hash mentioned here can be referred to as the consensus parameter corresponding to the current consensus. [066] Of course, in addition to the view number v and the block header hash described above, the consensus parameter can also include other information. For different forms of consensus, the specific content of the consensus parameter differs. Details are not described here. [067] After determining that the consensus check on the preprocessing block currently ready for validation is successful, the trusted protocol node can further determine the consensus parameter corresponding to the current consensus and obtain the consensus parameter corresponding to the next consensus by updating that consensus parameter, that is, obtain the consensus parameter corresponding to the next adjacent preprocessing block ready for validation. [068] The PBFT form of consensus is again used as an example. Suppose the consensus parameter corresponding to the current consensus includes the view number v and the previous-block header hash, where the view number v is 16 and the previous-block header hash is 0929d9sldom23oix239xed. After determining that the consensus check on the preprocessing block currently ready for validation is successful, the trusted protocol node can update the view number from 16 to 17 and, based on the header hash 679xx9a9a8dfa23389xx34 of that validated preprocessing block, update the previous-block header hash for the next consensus to 679xx9a9a8dfa23389xx34. As such, the consensus parameter corresponding to the next consensus is: the view number v is 17, and the previous-block header hash is 679xx9a9a8dfa23389xx34. [069] The trusted protocol node can thus obtain, based on the consensus parameter corresponding to the current consensus, the consensus parameter corresponding to the next consensus. As such, the trusted protocol node can start, based on the obtained consensus parameter corresponding to the next consensus, to validate the next adjacent preprocessing block ready for validation. The consensus parameter mentioned here can be stored in memory, can be stored in a database corresponding to the trusted protocol node, or can exist in the form of a global variable. [070] It should be noted that, in this embodiment of the present application, the consensus parameter can also be stored in the predetermined queue in correspondence with the processing parameter. As such, the processor configured to perform consensus processing can obtain the consensus parameter from the predetermined queue and initiate a new round of consensus processing based on the consensus parameter, while the processor configured to perform data processing obtains the processing parameter from the predetermined queue and begins, based on the obtained processing parameter, to perform data processing on the service data in the preprocessing block corresponding to the processing parameter.
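The update of the consensus parameter described in [068] can be written out as a small helper. The dataclass below is a hypothetical container; the values are the example values given above, and a real PBFT implementation would carry additional fields.

```python
from dataclasses import dataclass

@dataclass
class ConsensusParameter:
    view_number: int          # the view number v of the PBFT consensus
    previous_block_hash: str  # header hash of the previous block

def next_consensus_parameter(current, validated_block_hash):
    """Derive the consensus parameter of the next consensus from the current one."""
    return ConsensusParameter(
        view_number=current.view_number + 1,
        previous_block_hash=validated_block_hash,
    )

current = ConsensusParameter(16, "0929d9sldom23oix239xed")
nxt = next_consensus_parameter(current, "679xx9a9a8dfa23389xx34")
assert nxt.view_number == 17
assert nxt.previous_block_hash == "679xx9a9a8dfa23389xx34"
```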
[071] For example, suppose the processor obtains, from the FIFO queue, a processing parameter and the consensus parameter corresponding to that processing parameter, and the consensus parameter includes the view number v. The processor can then determine, from the storage space of the trusted protocol node, the ready-to-process preprocessing block corresponding to the view number v, so as to process that preprocessing block based on the obtained processing parameter. [072] Of course, in this embodiment of the present application, the processing parameter does not have to exist in the described predetermined queue. For example, the processing parameter can be stored in the memory of the trusted protocol node, can be stored in a database corresponding to the trusted protocol node, or can be stored elsewhere in the trusted protocol node. Details are not repeatedly described here. [073] It can be seen from the previous method that, after determining that the obtained preprocessing block is validated, the trusted protocol node starts, through parallel processing, validating the next preprocessing block ready for validation and processes the service data in the validated preprocessing block. That is, the trusted protocol node implements parallel processing of service data across the service consensus stage and the service submission stage. The trusted protocol node can not only perform data processing on one part of the service data in the service submission stage, but also perform consensus processing on another part of the service data in the service consensus stage. Therefore, the time interval between adjacent rounds of consensus processing in the service consensus stage can be reduced, so as to effectively improve the service data processing efficiency of the system. [074] As shown in Figure 2, to further describe the data processing method mentioned in this application, all the processes involved in the data processing method are briefly described below. [075] Figure 2 is a schematic diagram illustrating data processing performed by a trusted protocol node, according to an embodiment of the present application. [076] In the service handling stage, a user can send service data to the trusted protocol node using a client installed on a terminal, and the trusted protocol node can verify the received service data and store the verified service data in the storage space corresponding to the trusted protocol node. [077] In the service consensus stage, the trusted protocol node can obtain the preprocessing block that is ready for validation in the current consensus. If the trusted protocol node is used as the master node that initiates the current consensus, the trusted protocol node can take a part of the service data from its storage space and pack that part of the service data into a preprocessing block. In this situation, the trusted protocol node obtains the preprocessing block that needs to be validated in the current consensus. In addition, the trusted protocol node needs to transmit the preprocessing block to the other trusted protocol nodes in the consensus network, so that the other trusted protocol nodes perform a consensus check on the preprocessing block. [078] If the trusted protocol node is not the master node that initiates the current consensus, the trusted protocol node can obtain, from the master node that initiates the current consensus, the preprocessing block that needs to be validated in this service consensus, and then perform a consensus check on the preprocessing block.
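The two cases in [077] and [078] can be illustrated with a short sketch. The dictionary layout and the hash construction are hypothetical and only stand in for packing a preprocessing block from stored service data or receiving one from the master node.

```python
import hashlib
import json

def obtain_preprocessing_block(node, is_master, master=None):
    """Sketch of [077]-[078]: a master node packs stored service data into a
    preprocessing block; a non-master node receives the block from the master."""
    if is_master:
        part = node["stored_service_data"][:2]         # take a part of the stored service data
        block = {"service_data": part,
                 "previous_block_hash": node["latest_block_hash"]}
        block["header_hash"] = hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()).hexdigest()
        return block                                   # to be transmitted for consensus checking
    return master["proposed_block"]                    # received from the master node

node = {"stored_service_data": ["tx1", "tx2", "tx3"],
        "latest_block_hash": "0929d9sldom23oix239xed"}
print(obtain_preprocessing_block(node, is_master=True))
```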
[079] After determining that the consensus check on the preprocessing block is successful, the trusted protocol node can obtain, by updating based on the consensus parameter corresponding to the current consensus (that is, the consensus parameter corresponding to the preprocessing block), the consensus parameter corresponding to the next consensus (that is, the consensus parameter corresponding to the next adjacent preprocessing block ready for validation), so as to perform the next consensus. In addition, the trusted protocol node can further obtain, based on the preprocessing block and the processing parameter corresponding to the current consensus (that is, the processing parameter corresponding to the currently validated preprocessing block), the processing parameter corresponding to the next consensus (that is, the processing parameter corresponding to the next adjacent preprocessing block ready for validation), and store the obtained processing parameter corresponding to the next consensus in a FIFO queue. [080] Having obtained the consensus parameter and the processing parameter corresponding to the next adjacent consensus, the trusted protocol node can start executing the service consensus stage of the next adjacent consensus, that is, start validating the next adjacent preprocessing block ready for validation. In addition, when starting to execute the service consensus stage of the next adjacent consensus, the trusted protocol node can perform, in parallel, the service submission stage of the current consensus using the processor. [081] That is, the trusted protocol node hands the operations involved in the service submission stage over to the processor for completion, and the trusted protocol node can then execute the next adjacent consensus, so as to implement parallel processing of a service consensus stage and a service submission stage. Therefore, the time interval between adjacent consensus rounds is shortened, thereby improving consensus efficiency. [082] The processor can obtain, from the FIFO queue, the processing parameter corresponding to the current consensus and then store the currently validated preprocessing block in the trusted protocol in a block format based on the chaining parameter in the processing parameter. The processor can, based on the release parameter and the storage parameter in the processing parameter, release the service data of the preprocessing block from the storage space of the trusted protocol node and store the released service data at the corresponding storage location specified by the storage parameter. The processor can delete, based on the delete parameter in the processing parameter, for example, the pre-prepare message, the prepare message, or the commit message of the PBFT consensus generated in the service consensus stage, so as to save storage space of the trusted protocol node. [083] The above is the data processing method provided in an embodiment of the present application. As shown in Figure 3, based on the same idea, an embodiment of the present application further provides a data processing apparatus.
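The submission-stage work in [082] can be pictured as a worker that drains the FIFO queue and applies the chaining, release, storage, and delete parameters. All structures and function names below are hypothetical; the sketch only illustrates the order of operations described above.

```python
import queue

fifo = queue.Queue()                    # the predetermined FIFO queue of parameters
blocks_by_view = {}                     # validated preprocessing blocks indexed by view number v

def chain(block):
    print(f"appending {block} to the trusted protocol")                # chaining parameter

def store_and_release(block, location):
    print(f"moving service data of {block} to {location}")            # storage and release parameters

def delete_consensus_messages(block):
    print(f"deleting pre-prepare/prepare/commit messages of {block}")  # delete parameter

def submission_worker():
    """Drains the FIFO queue and performs the submission-stage operations of [082]."""
    while not fifo.empty():
        consensus_param, processing_param = fifo.get()
        block = blocks_by_view[consensus_param["view_number"]]
        chain(block)
        store_and_release(block, processing_param["storage_location"])
        delete_consensus_messages(block)

blocks_by_view[17] = "block-17"
fifo.put(({"view_number": 17}, {"storage_location": "table b"}))
submission_worker()
```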
[084] Figure 3 is a schematic diagram illustrating a data processing apparatus, according to an embodiment of the present application. The apparatus includes: an acquisition module (301), configured to obtain a preprocessing block that is ready for validation and validate the preprocessing block; and a processing module (302), configured to: if it is determined that the preprocessing block is validated, start validating a next preprocessing block that is ready for validation and, in parallel, perform data processing on service data in the validated preprocessing block. [085] The processing module (302) performs the data processing on the service data in the validated preprocessing block in parallel using a predetermined processor. [086] For the validated preprocessing block, the processing module (302) performs the following operations: invoking a processor to obtain a storage parameter, where the storage parameter includes a storage location; determining, based on the storage parameter, a ready-to-process preprocessing block corresponding to the storage parameter; and storing the service data of the determined ready-to-process preprocessing block at the storage location. [087] After it is determined that the preprocessing block is validated, the acquisition module (301) determines, based on the preprocessing block and the storage parameter of the preprocessing block, a storage parameter of a next adjacent preprocessing block ready for validation, and stores the storage parameter of that next adjacent preprocessing block ready for validation. [088] The acquisition module (301) stores the storage parameter of the next adjacent preprocessing block ready for validation in a first-in, first-out (FIFO) queue. [089] The processing module (302) invokes the processor to obtain the storage parameter from the FIFO queue. [090] If it is determined that the preprocessing block is validated, the acquisition module (301) determines a consensus parameter corresponding to the preprocessing block and obtains, based on the determined consensus parameter corresponding to the preprocessing block, a consensus parameter corresponding to a next adjacent preprocessing block ready for validation, where the consensus parameter is used to instruct a trusted protocol node to validate the preprocessing block ready for validation. [091] When the consensus parameter corresponding to the next adjacent preprocessing block ready for validation is obtained, the processing module (302) starts, based on the obtained consensus parameter, to validate the next adjacent preprocessing block ready for validation. [092] Based on the same idea, an embodiment of the present application further provides another data processing apparatus. Specifically, the apparatus includes a memory and at least one processor, where the memory stores a program, and the at least one processor is configured to perform the following steps: obtaining a preprocessing block that is ready for validation and validating the preprocessing block; and if it is determined that the preprocessing block is validated, starting to validate a next preprocessing block that is ready for validation and, in parallel, performing data processing on service data in the validated preprocessing block. [093] For the specific operations performed by the processor using the program stored in the memory, reference may be made to the content recorded in the described embodiments. Details are not repeated here.
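Purely for illustration, the apparatus of Figure 3 can be mirrored by two cooperating classes; nothing below is prescribed by the application, and the method bodies are placeholders.

```python
class AcquisitionModule:
    """Obtains a validation-ready preprocessing block and validates it (module 301)."""

    def obtain(self):
        return {"id": "pre-block", "service_data": ["tx1", "tx2"]}

    def validate(self, block):
        return True   # stand-in for the consensus check

class ProcessingModule:
    """Processes the validated block while the next validation proceeds (module 302)."""

    def handle(self, block):
        print(f"submission-stage processing of {block['id']}")

acquisition, processing = AcquisitionModule(), ProcessingModule()
block = acquisition.obtain()
if acquisition.validate(block):
    processing.handle(block)   # in the apparatus this runs in parallel with the next validation
```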
[094] In the embodiments of the present application, after determining that the obtained preprocessing block is validated, the trusted protocol node starts, through parallel processing, validating the next preprocessing block that is ready for validation and processes the service data in the validated preprocessing block. That is, the trusted protocol node implements parallel processing of service data across a service consensus stage and a service submission stage. The trusted protocol node can not only perform data processing on one part of the service data in the service submission stage, but also perform consensus processing on another part of the service data in the service consensus stage. Therefore, the time interval between rounds of consensus processing in the service consensus stage can be reduced, so as to effectively improve the service data processing efficiency of the system. [095] In the 1990s, whether a technical improvement was a hardware improvement (for example, an improvement to a circuit structure such as a diode, a transistor, or a switch) or a software improvement (an improvement to a method procedure) could be clearly distinguished. However, as technologies develop, improvements to many current method procedures can be considered direct improvements to hardware circuit structures. A designer usually programs an improved method procedure into a hardware circuit to obtain a corresponding hardware circuit structure. Therefore, a method procedure can be improved by using a hardware entity module. For example, a programmable logic device (PLD) (for example, a field-programmable gate array (FPGA)) is such an integrated circuit, and its logic function is determined by a user through device programming. The designer performs the programming to "integrate" a digital system onto a PLD without asking a chip manufacturer to design and produce an application-specific integrated circuit chip. In addition, this programming is nowadays mostly implemented using "logic compiler" software rather than manually making an integrated circuit chip. This is similar to a software compiler used to develop and write a program, but the source code before compilation must also be written in a specific programming language, which is called a hardware description language (HDL). There is not only one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language). Currently, VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are the most popular. [096] A person skilled in the art should also understand that a hardware circuit that implements the logical method procedure can easily be obtained merely by performing logic programming of the method procedure using the several described hardware description languages and programming the result into an integrated circuit. [097] A controller can be implemented in any appropriate manner.
For example, the controller can take the form of a microprocessor or a processor together with a computer-readable medium storing computer-readable program code (for example, software or firmware) executable by the (micro)processor, a logic gate, a switch, an application-specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320. A memory controller can also be implemented as a part of the memory control logic. A person skilled in the art also knows that, in addition to implementing the controller purely as computer-readable program code, logic programming can be performed on the method steps so that the controller implements the same functions in the form of a logic gate, a switch, an application-specific integrated circuit, a programmable logic controller, an embedded microcontroller, or the like. Therefore, such a controller can be considered a hardware component, and an apparatus included in the controller to implement various functions can also be considered a structure within the hardware component. Alternatively, an apparatus configured to implement various functions can even be considered both a software module implementing the method and a structure within the hardware component. [098] The system, apparatus, module, or unit described in the described embodiments can be implemented specifically by a computer chip or an entity, or implemented by a product having a certain function. A typical implementation device is a computer. Specifically, the computer can be, for example, a personal computer, a laptop computer, a cellular phone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an e-mail device, a game console, a tablet computer, a wearable device, or a combination of any of these devices. [099] For ease of description, the described apparatus is divided into various units according to function, which are described separately. Of course, when the present application is implemented, the functions of the units can be implemented in one or more pieces of software and/or hardware. [0100] A person skilled in the art should understand that embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of hardware-only embodiments, software-only embodiments, or embodiments combining software and hardware. In addition, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, a disk memory, a CD-ROM, an optical memory, and the like) that include computer-usable program code. [0101] The present invention is described with reference to the flowcharts and/or block diagrams of the method, the device (system), and the computer program product according to the embodiments of the present invention. It should be understood that computer program instructions can be used to implement each process and/or each block in the flowcharts and/or block diagrams, and a combination of processes and/or blocks in the flowcharts and/or block diagrams.
These computer program instructions can be provided to a general-purpose computer, a dedicated computer, an embedded processor, or a processor of any other programmable data processing device to produce a machine, so that the instructions executed by the computer or the processor of the other programmable data processing device produce an apparatus for implementing a specified function in one or more processes of the flowcharts and/or one or more blocks of the block diagrams. [0102] These computer program instructions can also be stored in a computer-readable memory that can instruct a computer or any other programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce an artifact that includes an instruction apparatus. The instruction apparatus implements a specified function in one or more processes of the flowcharts and/or one or more blocks of the block diagrams. [0103] These computer program instructions can also be loaded onto a computer or another programmable data processing device, so that a series of operations and steps are performed on the computer or the other programmable device, thereby producing computer-implemented processing. Therefore, the instructions executed on the computer or the other programmable device provide steps for implementing a specified function in one or more processes of the flowcharts and/or one or more blocks of the block diagrams. [0104] In a typical configuration, a computing device includes one or more processors (CPUs), an input/output interface, a network interface, and a memory. [0105] The memory may include a form of volatile memory, random access memory (RAM), and/or non-volatile memory on a computer-readable medium, such as a read-only memory (ROM) or a flash memory (flash RAM). The memory is an example of the computer-readable medium. [0106] The computer-readable medium includes volatile and non-volatile, removable and non-removable media, and can store information using any method or technology. The information can be a computer-readable instruction, a data structure, a program module, or other data. Examples of computer storage media include, but are not limited to, a phase-change random access memory (PRAM), a static random access memory (SRAM), a dynamic random access memory (DRAM), a random access memory (RAM) of another type, a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a flash memory or another memory technology, a compact disc read-only memory (CD-ROM), a digital versatile disc (DVD) or other optical storage, a magnetic tape, a magnetic disk storage, another magnetic storage device, or any other non-transmission medium. The computer storage medium can be used to store information that can be accessed by the computing device. As described in this specification, the computer-readable medium does not include transitory media, for example, a modulated data signal or a carrier. [0107] Furthermore, it should be noted that the terms "include", "contain", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements not only includes those elements, but also includes other elements that are not expressly listed, or further includes elements inherent to such a process, method, article, or device. Without more constraints, an element preceded by "includes a ..." does not preclude the existence of additional identical elements in the process, method, article, or device that includes the element.
[0108] A person skilled in the art should understand that the embodiments of the present application may be provided as a method, a system, or a computer program product. Therefore, the present application may take the form of hardware-only embodiments, software-only embodiments, or embodiments combining software and hardware. In addition, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, a disk memory, a CD-ROM, an optical memory, and the like) that include computer-usable program code. [0109] The present application can be described in the general context of computer-executable instructions executed by a computer, for example, a program module. Generally, the program module includes a routine, a program, an object, a component, a data structure, and the like for executing a specific task or implementing a specific abstract data type. The present application can also be practiced in distributed computing environments. In these distributed computing environments, tasks are performed by remote processing devices connected through a communication network. In the distributed computing environments, the program module can be located in both local and remote computer storage media that include storage devices. [0110] The embodiments in this specification are all described in a progressive manner; for identical or similar parts of the embodiments, reference can be made to one another, and each embodiment focuses on a difference from the other embodiments. In particular, the system embodiment is basically similar to the method embodiment and is therefore described briefly; for related parts, reference may be made to the partial descriptions of the method embodiment. [0111] The previous descriptions are merely embodiments of the present application and are not intended to limit the present application. For a person skilled in the art, the present application may have various modifications and changes. Any modification, equivalent replacement, improvement, and the like made within the scope and principle of the present application shall fall within the scope of protection of the present application. [0112] Figure 4 is a flowchart illustrating an example of a computer-implemented method (400) for improving the processing efficiency of trusted protocol technologies using parallel service data processing, in accordance with an implementation of the present disclosure. For clarity of presentation, the following description generally describes the method (400) in the context of the other figures of this description. However, it will be understood that the method (400) can be performed, for example, by any system, environment, software, and hardware, or by a combination of systems, environments, software, and hardware, as appropriate. In some implementations, various steps of the method (400) can be performed in parallel, in combination, in a loop, or in any order. [0113] At (402), a preprocessing block that is ready for validation in a current consensus cycle is obtained by a trusted protocol node in a service consensus stage. In some implementations, obtaining the validation-ready preprocessing block includes the trusted protocol node generating the validation-ready preprocessing block based on service data stored by the trusted protocol node, or obtaining the validation-ready preprocessing block from another trusted protocol node. From (402), the method (400) proceeds to (404).
[0114] In (404), the preprocessing block ready for validation is validated. From (404), the method (400) proceeds to (406).

[0115] In (406), a determination is made as to whether the preprocessing block ready for validation is validated. If it is determined that the preprocessing block ready for validation is not validated, the method (400) returns to (404). Otherwise, if it is determined that the preprocessing block ready for validation is validated, the method (400) proceeds, in parallel, to both (408) and (410).

[0116] In (408), validation is started on a next preprocessing block ready for validation. In some implementations, after it is determined that the preprocessing block ready for validation is validated: 1) a storage parameter of a next adjacent preprocessing block ready for validation is determined based on the preprocessing block ready for validation and on the storage parameter of the preprocessing block ready for validation; and 2) the storage parameter of the next adjacent preprocessing block ready for validation is stored. In some implementations, if it is determined that the preprocessing block ready for validation is validated: 1) a consensus parameter corresponding to the preprocessing block ready for validation is determined; and 2) a consensus parameter corresponding to a next adjacent preprocessing block ready for validation is obtained based on that consensus parameter, where the consensus parameter is used to instruct the trusted protocol node to validate the next adjacent preprocessing block ready for validation. In some implementations, validation is started on the next adjacent preprocessing block ready for validation when the consensus parameter corresponding to the next adjacent preprocessing block ready for validation is obtained.

[0117] In (410), data processing is performed in parallel on the service data in the validated preprocessing block. In some implementations, the parallel data processing is performed on the service data in the validated preprocessing block using a predetermined processor. In some implementations, the parallel data processing using the predetermined processor includes, for the validated preprocessing block: 1) invoking a processor to obtain a storage parameter comprising a storage location; 2) determining, based on the storage parameter, a ready-to-process preprocessing block corresponding to the storage parameter; and 3) storing, at the storage location, the service data of the ready-to-process preprocessing block.
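The branch at (406)-(410) can be pictured as a two-thread pipeline: one thread keeps running the service consensus stage while a second thread drains validated blocks through the service submission stage. The sketch below is only an illustration of that arrangement; the class and helper names (TrustedProtocolNode, consensus_loop, _submission_loop, the dict-based store) are hypothetical, and the consensus check is a placeholder rather than the specification's actual validation logic.

```python
import queue
import threading


class TrustedProtocolNode:
    def __init__(self):
        self.store = {}                          # stands in for the specified database
        self.submit_queue = queue.Queue()        # validated blocks awaiting submission
        # Second thread: a pre-configured processor for the service submission stage.
        threading.Thread(target=self._submission_loop, daemon=True).start()

    def consensus_loop(self, blocks):
        """First thread: service consensus stage, steps (404)-(408)."""
        consensus_parameter = 0                  # placeholder consensus parameter
        for block in blocks:
            while not self.validate(block, consensus_parameter):
                pass                             # (406): repeat validation until it passes
            # (410): hand the validated block to the submission thread ...
            self.submit_queue.put(block)
            # (408): ... and derive the consensus parameter of the next adjacent block,
            # so validation of that block can start immediately instead of waiting
            # for the submission stage to finish.
            consensus_parameter += 1

    def _submission_loop(self):
        """Second thread: service submission stage, step (410), running in parallel."""
        while True:
            block = self.submit_queue.get()
            # The storage parameter gives the storage location for this block's data.
            location = block["storage_parameter"]
            self.store[location] = block["service_data"]
            self.submit_queue.task_done()

    def validate(self, block, consensus_parameter):
        # Placeholder check; a real node would compare the block's service data
        # against the service data stored during the service handling stage.
        return True
```

Under these assumptions, the consensus thread never blocks on storage: while the submission thread writes the service data of one block to its storage location, the consensus thread is already validating the next adjacent block.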
[0118] Implementations of the subject matter described in this specification can be implemented so as to achieve particular advantages or technical effects. For example, implementations of the described subject matter allow greater processing efficiency of trusted protocol technologies through parallel service data processing. In turn, more efficient processing can help improve overall data security. The described parallel service data processing also allows for more efficient use of computer resources (for example, processing cycles and memory usage), as well as faster processing. At least these actions can minimize or avoid wasting available computer resources with respect to trusted-protocol-based transactions. In some cases, network transaction speed can be increased due to more efficient trusted protocol processing.

[0119] In some implementations, trusted protocol data can be sent between computing devices and can include graphical information (for example, for use in a graphical user interface). In these implementations, elements of a graphical user interface running on one or more computing devices can be positioned so as to be less intrusive to a user. For example, elements can be positioned to hide the smallest amount of data and to avoid covering any critical or frequently used graphical user interface elements.

[0120] The embodiments and operations described in this specification may be implemented in digital electronic circuits, or in computer software, firmware, or hardware, including the structures disclosed in this specification or combinations of one or more of them. The operations may be implemented as operations performed by a data processing apparatus on data stored on one or more computer-readable storage devices or received from other sources. A data processing apparatus, computer, or computing device may encompass data processing apparatuses, devices, and machines, including, for example, a programmable processor, a computer, a system on a chip, or multiple ones, or combinations, of the foregoing. The apparatus may include special-purpose logic circuitry, for example, a central processing unit (CPU), a field-programmable gate array (FPGA), or an application-specific integrated circuit (ASIC). The apparatus may also include code that creates an execution environment for the computer program in question, for example, code that constitutes processor firmware, a protocol stack, a database management system, an operating system (for example, an operating system or a combination of operating systems), a cross-platform runtime environment, a virtual machine, or a combination of one or more of them. The apparatus and execution environment can realize various infrastructures of different computing models, such as web services, distributed computing, and grid computing infrastructures.

[0121] A computer program (also known, for example, as a program, software, software application, software module, software unit, script, or code) may be written in any form of programming language, including compiled or interpreted languages and declarative or procedural languages, and may be deployed in any form, including as a standalone program or as a module, component, subroutine, object, or other unit suitable for use in a computing environment. A program may be stored in a part of a file that holds other programs or data (for example, one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (for example, files that store one or more modules, subprograms, or pieces of code). A computer program can be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.

[0122] Processors suitable for executing a computer program include, by way of example, general-purpose and special-purpose microprocessors, and any one or more processors of any type of digital computer. Generally, a processor will receive instructions and data from a read-only memory, a random access memory, or both. The essential elements of a computer are a processor for performing actions in accordance with instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data.
A computer can be embedded in another device, for example, a mobile device, a personal digital assistant (PDA), a game console, a Global Positioning System (GPS) receiver, or a portable storage device. Devices suitable for storing computer program instructions and data include non-volatile memories, media, and memory devices, including, by way of example, semiconductor memory devices, magnetic disks, and magneto-optical disks. The processor and the memory can be supplemented by, or incorporated into, special-purpose logic circuitry.

[0123] Mobile devices may include handsets, user equipment (UE), mobile phones (for example, smartphones), tablets, wearable devices (for example, smart watches and smart glasses), devices implanted within the human body (for example, biosensors and cochlear implants), or other types of mobile devices. Mobile devices can communicate wirelessly (for example, using radio frequency (RF) signals) with various communication networks (described below). Mobile devices can include sensors for determining characteristics of the mobile device's current environment. The sensors can include cameras, microphones, proximity sensors, GPS sensors, motion sensors, accelerometers, ambient light sensors, humidity sensors, gyroscopes, compasses, barometers, fingerprint sensors, facial recognition systems, RF sensors (for example, WiFi and cellular radios), thermal sensors, or other types of sensors. For example, the cameras can include a front-facing or rear-facing camera with movable or fixed lenses, a flash, an image sensor, and an image processor. The camera may be a megapixel camera capable of capturing detail for facial and/or iris recognition. The camera, together with a data processor and authentication information stored in memory or accessed remotely, can form a facial recognition system. The facial recognition system, or one or more sensors, for example, microphones, motion sensors, accelerometers, GPS sensors, or RF sensors, can be used for user authentication.

[0124] To provide interaction with a user, embodiments can be implemented on a computer having a display device and an input device, for example, a liquid crystal display (LCD) or an organic light-emitting diode (OLED)/virtual reality (VR)/augmented reality (AR) display for displaying information to the user, and a touchscreen, keyboard, and pointing device by which the user can provide input to the computer. Other types of devices can also be used to provide interaction with a user; for example, feedback provided to the user can be any form of sensory feedback, for example, visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input. In addition, a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user's client device in response to requests received from the web browser.

[0125] Embodiments can be implemented using computing devices interconnected by any form or medium of wired or wireless digital data communication (or a combination thereof), for example, a communication network. Examples of interconnected devices are a client and a server that are generally remote from each other and that typically interact through a communication network. A client, for example, a mobile device, can carry out transactions itself, with a server, or through a server, for example, performing purchase, sale, payment, delivery, shipment, or loan transactions, or authorizing them.
Such transactions may be in real time, such that an action and a response are temporally close; for example, an individual perceives the action and the response as occurring substantially simultaneously, the time difference for a response following the individual's action is less than one millisecond (ms) or less than one second (s), or the response occurs without intentional delay, taking into account the processing limitations of the system.

[0126] Examples of communication networks include a local area network (LAN), a radio access network (RAN), a metropolitan area network (MAN), and a wide area network (WAN). The communication network can include all or part of the Internet, another communication network, or a combination of communication networks. Information can be transmitted over the communication network in accordance with various protocols and standards, including Long Term Evolution (LTE), 5G, IEEE 802, Internet Protocol (IP), or other protocols or combinations of protocols. The communication network can transmit voice, video, biometric, or authentication data, or other information, between connected computing devices.

[0127] Features described as separate implementations can be implemented, in combination, in a single implementation, while features described as a single implementation can be implemented in multiple implementations, separately or in any suitable sub-combination. Operations described and claimed in a particular order should not be understood as requiring that particular order, nor that all illustrated operations must be performed (some operations may be optional). As appropriate, multitasking or parallel processing (or a combination of multitasking and parallel processing) can be performed.
Claims (6)

[0001] 1. METHOD (400) OF DATA PROCESSING, comprising the steps of: obtaining (402, S101), by a trusted protocol node comprising two threads and in a service consensus stage, a preprocessing block ready for validation in a current consensus cycle, the trusted protocol node pre-configuring a processor to perform an operation involved in a service submission stage; and validating (404, S101), by the trusted protocol node, the preprocessing block during the service consensus stage using a first thread of the two threads; characterized by further comprising the steps of: determining a consensus parameter corresponding to the preprocessing block ready for validation; if it is determined (406) that the preprocessing block is validated, starting to validate (408, S102), by the trusted protocol node, a next preprocessing block ready for validation using the first thread; obtaining, based on the determined consensus parameter corresponding to the preprocessing block ready for validation, a consensus parameter corresponding to a next adjacent preprocessing block ready for validation; and, in response to obtaining the consensus parameter corresponding to the next adjacent preprocessing block ready for validation, performing (410, S102), by the trusted protocol node, data processing on service data in the validated preprocessing block during the service submission stage using the second thread, in parallel with the service consensus stage and with the validation of the next adjacent preprocessing block ready for validation.

[0002] 2. METHOD (400), according to claim 1, characterized in that obtaining the preprocessing block ready for validation comprises the trusted protocol node generating the preprocessing block ready for validation based on service data stored by the trusted protocol node, or obtaining the preprocessing block ready for validation from another trusted protocol node.

[0003] 3. METHOD (400), according to claim 1, characterized in that performing data processing on the service data in the validated preprocessing block in parallel specifically comprises: performing data processing on the service data in the validated preprocessing block in parallel using a predetermined processor.

[0004] 4. METHOD (400), according to claim 3, characterized in that performing data processing on the service data in the validated preprocessing block using a predetermined processor specifically comprises, for the validated preprocessing block, performing the following operations: invoking a processor to obtain a storage parameter, wherein the storage parameter comprises a storage location; and determining, based on the storage parameter, a ready-to-process preprocessing block corresponding to the storage parameter and storing service data of the determined ready-to-process preprocessing block at the storage location.

[0005] 5. METHOD (400), according to any one of claims 1 to 4, characterized in that the method further comprises the step of: after determining that the preprocessing block is validated, determining, based on the preprocessing block and on the storage parameter of the preprocessing block, a storage parameter of a next adjacent preprocessing block ready for validation, and storing the storage parameter of the next adjacent preprocessing block ready for validation.

[0006] 6. METHOD (400), according to claim 1, characterized in that starting to validate a next preprocessing block ready for validation comprises: when the consensus parameter corresponding to the next adjacent preprocessing block ready for validation is obtained, starting, based on the obtained consensus parameter, to validate the next adjacent preprocessing block ready for validation.